Hi, this is Canyu Chen (陈灿宇), a third-year Computer Science Ph.D. student at Illinois Institute of Technology (IIT), where I have been since Fall 2021, advised by Prof. Kai Shu. Before joining IIT, I received my B.S. in Computer Science from the University of Chinese Academy of Sciences (UCAS) in 2020.
I focus on Truthful, Safe and Responsible Large Language Models, with applications in Social Computing and Healthcare. I started and currently lead the LLMs Meet Misinformation initiative, which aims to combat misinformation in the age of LLMs. In the long run, I aim to pursue Safe and Aligned Artificial General Intelligence. I am always happy to chat, discuss potential collaborations, or give talks about my research at related seminars. Feel free to contact me via email (cchen151 AT hawk.iit.edu) or WeChat (ID: alexccychen).
News
- [04/2024] Our survey paper Combating Misinformation in the Age of LLMs: Opportunities and Challenges has been accepted to AI Magazine 2024. [paper list]
- [03/2024] Deeply honored and humbled to receive the prestigious 🏆 Sigma Xi Student Research Award 2024 from Illinois Tech and the local Sigma Xi chapter. Thanks to Illinois Tech Today for the coverage.
- [02/2024] New preprint online: Can Large Language Model Agents Simulate Human Trust Behaviors? [project website]. Code and results have been released for verification [code and results]. Demos on HuggingFace: [Trust Game Demo] [Repeated Trust Game Demo].
- [01/2024] Can LLM-Generated Misinformation Be Detected? has been accepted to ICLR 2024. [project website] [dataset and code]
- [12/2023] Honored to receive the 🏆 Didactic Paper Award (1/35 of all accepted papers) at the ICBINB@NeurIPS 2023 workshop for Can LLM-Generated Misinformation Be Detected?.
- [10/2023] Started the LLMs Meet Misinformation initiative, along with a new survey paper, Combating Misinformation in the Age of LLMs: Opportunities and Challenges [project website], and a paper list collecting related papers and resources [paper list].
Older News
- [10/2023] Honored to be covered by Illinois Tech News for research on Trustworthy AI. [IIT News]
- [09/2023] New preprint online: Can LLM-Generated Misinformation Be Detected? [project website]. The dataset and code have been released [dataset and code].
- [06/2023] Will attend FAccT 2023 as a volunteer. Welcome to Chicago, and glad to connect!
- [05/2023] One paper accepted at EACL 2023; I will attend online. Welcome to our poster!
- [04/2023] Glad to be invited by Prof. Lu Cheng to give a talk on AI Fairness at UIC. [Slides]
- [11/2022] Attending NeurIPS 2022 in person. See you in New Orleans!
- [08/2022] Attending KDD 2022 in person. Glad to meet old friends and make new ones!
Publications
2024
-
Combating Misinformation in the Age of LLMs: Opportunities and Challenges
Canyu Chen, Kai Shu.
Published in AI Magazine 2024.
[arXiv] [project website] [paper list]
Media Coverage: [Marktechpost AI Research News] [Reddit r/machinelearningnews] [Analytics Vidhya Blog].
Invited Talks: [Psych Methods].
-
Can Large Language Model Agents Simulate Human Trust Behaviors?
Chengxing Xie*, Canyu Chen*, Feiran Jia, Ziyu Ye, Kai Shu, Adel Bibi, Ziniu Hu, Philip Torr, Bernard Ghanem, Guohao Li. (*equal contributions)
Appears in the AGI@ICLR 2024 and NLP+CSS@NAACL 2024 workshops, and the Seventeenth Midwest Speech and Language Days Symposium (MSLD 2024, Oral).
[arXiv] [project website] [code and results]
Demos on HuggingFace: [Trust Game Demo] [Repeated Trust Game Demo]
-
Can LLM-Generated Misinformation Be Detected?
Canyu Chen, Kai Shu.
Published in Proceedings of The Twelfth International Conference on Learning Representations (ICLR 2024).
Also appears in the RegML@NeurIPS 2023 (Oral) and ICBINB@NeurIPS 2023 (Spotlight) workshops.
[arXiv] [project website] [dataset and code] [zhihu] [twitter/x.com] [LinkedIn]
🏆 Award: Didactic Paper Award at the ICBINB@NeurIPS 2023 workshop (1/35 of all accepted papers).
🏆 Award: Spotlight Research Award in the symposium AGI Leap Summit 2024.
🏆 Award: Third Place Award in the Illinois Tech College of Computing Poster Session 2024 (Ph.D. Group).
Included in the curriculum at: [The City University of New York].
Media Coverage: [The Register] [LLM Security] [Blog 1] [Blog 2].
Invited Talks: [AGI Leap Summit Spotlight Research Talk] [Tsinghua AI Time] [Psych Methods].
-
Can Large Language Models Identify Authorship?
Baixiang Huang, Canyu Chen, Kai Shu.
arXiv preprint. Mar. 2024.
[arXiv] [code]
-
Introducing v0.5 of the AI Safety Benchmark from MLCommons
MLCommons AI Safety Working Group
arXiv preprint. Apr. 2024.
[arXiv] [official blog]
Media Coverage: [IEEE Spectrum] [AK Daily Papers] [Marktechpost] [AI Business] [EnterpriseAI News] [HPCwire] [Hackster.io] [ELBLOG.PL] [SiliconANGLE] [GoatStack.ai].
2023
-
PromptDA: Label-guided Data Augmentation for Prompt-based Few-shot Learners.
Canyu Chen, Kai Shu.
Published in Proceedings of the 17th Conference of the European Chapter of the Association for Computational Linguistics (EACL 2023, Main Conference Long Paper).
Also appears in the ENLSP@NeurIPS 2022 workshop, Oral (Spotlight).
[arXiv] [code] [youtube] [bilibili] [slides] [poster]
-
Fair Classification via Domain Adaptation: A Dual Adversarial Learning Approach.
Yueqing Liang, Canyu Chen, Tian Tian, Kai Shu.
Published in Frontiers in Big Data 2023.
[paper] [arXiv]
-
Attacking Fake News Detectors via Manipulating News Social Engagement.
Haoran Wang, Yingtong Dou, Canyu Chen, Lichao Sun, Philip S. Yu, Kai Shu.
Published in Proceedings of The ACM Web Conference 2023 (WWW 2023).
[arXiv] [code]
Media Coverage: [Montreal AI Ethics Institute].
-
MetaGAD: Learning to Meta Transfer for Few-shot Graph Anomaly Detection.
Xiongxiao Xu, Kaize Ding, Canyu Chen, Kai Shu.
arXiv preprint. May 2023.
[arXiv]
2022
-
Combating Health Misinformation in Social Media: Characterization, Detection, Intervention, and Open Issues.
Canyu Chen*, Haoran Wang*, Matthew Shapiro, Yunyu Xiao, Fei Wang, Kai Shu. (*equal contributions)
arXiv preprint. Nov. 2022.
[arXiv]
-
When Fairness Meets Privacy: Fair Classification with Semi-Private Sensitive Attributes.
Canyu Chen, Yueqing Liang, Xiongxiao Xu, Shangyu Xie, Ashish Kundu, Ali Payani, Yuan Hong, Kai Shu.
Appears in the TSRML@NeurIPS 2022 and AFCP@NeurIPS 2022 workshops.
[arXiv] [Video] [Slides] [Poster]
Media Coverage: [Illinois Tech News].
-
Artificial Intelligence Algorithms for Treatment of Diabetes.
Mudassir M. Rashid, Mohammad Reza Askari, Canyu Chen, Yueqing Liang, Kai Shu, Ali Cinar.
Published in Algorithms 2022.
[Paper]
-
BOND: Benchmarking Unsupervised Outlier Node Detection on Static Attributed Graphs.
Kay Liu, Yingtong Dou, Yue Zhao, Xueying Ding, Xiyang Hu, Ruitong Zhang, Kaize Ding, Canyu Chen, Hao Peng, Kai Shu, Lichao Sun, Jundong Li, George H. Chen, Zhihao Jia, Philip S. Yu.
Published in Proceedings of the 36th Conference on Neural Information Processing Systems (NeurIPS 2022), Datasets and Benchmarks Track.
[arXiv] [code]
Talks
-
[04/18/2023] Fairness in AI: An Introduction at UIC
[Slides]
Awards
- Travel Award for Seventeenth Midwest Speech and Language Days (MSLD 2024)
- Sigma Xi Student Research Award 2024 from Illinois Tech and the local Sigma Xi chapter. (An award of $500 is given each year to up to two graduate students at Illinois Tech who have demonstrated significant promise in research and scholarship through their accomplishments. There was only one awardee across the whole university in 2024.)
- Technical AI Safety Fellowship 2024 Spring from Harvard AI Safety Student Team.
- Third Place Award in the Illinois Tech College of Computing Poster Session 2024 (Ph.D. Group).
- Spotlight Research Award in the symposium AGI Leap Summit 2024.
- Didactic Paper Award (1/35 of all accepted papers) at the ICBINB@NeurIPS 2023 workshop.
- NeurIPS 2023 Volunteer Award.
Coverage
- Illinois Tech Today: "Recognizing the Outstanding Work of Our Illinois Tech Faculty"
- Marktechpost AI Research News: "This AI Report from the Illinois Institute of Technology Presents Opportunities and Challenges of Combating Misinformation with LLMs"
- The Register: "It's true, LLMs are better than people – at creating convincing misinformation"
- Illinois Tech News: "Breaking Biases"
- Montreal AI Ethics Institute: "Attacking Fake News Detectors via Manipulating News Social Engagement"
- IEEE Spectrum: "Announcing a Benchmark to Improve AI Safety: MLCommons has made benchmarks for AI performance—now it's time to measure safety"